5 research outputs found
VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction
The success of Neural Radiance Fields (NeRF) in novel view synthesis has
inspired researchers to propose neural implicit scene reconstruction. However,
most existing neural implicit reconstruction methods optimize per-scene
parameters and therefore lack generalizability to new scenes. We introduce
VolRecon, a novel generalizable implicit reconstruction method with Signed Ray
Distance Function (SRDF). To reconstruct the scene with fine details and little
noise, VolRecon combines projection features aggregated from multi-view
features, and volume features interpolated from a coarse global feature volume.
Using a ray transformer, we compute SRDF values of sampled points on a ray and
then render color and depth. On the DTU dataset, VolRecon outperforms SparseNeuS by
about 30% in sparse-view reconstruction and achieves accuracy comparable to
MVSNet in full-view reconstruction. Furthermore, our approach exhibits good
generalization performance on the large-scale ETH3D benchmark.
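The rendering step the abstract describes (SRDF values along a ray composited into color and depth) can be sketched as standard alpha compositing. This is a minimal illustration, not VolRecon's actual formulation: the sigmoid conversion from SRDF to per-sample occupancy and the sharpness parameter `s` are assumptions of this sketch.

```python
import numpy as np

def render_from_srdf(srdf, colors, depths, s=10.0):
    """Render color and depth along one ray from per-sample SRDF values.

    srdf:   (N,) signed ray distances at the sampled points
    colors: (N, 3) per-sample colors
    depths: (N,) sample depths along the ray
    s:      sharpness of the sigmoid turning SRDF into occupancy (assumed here)
    """
    # Occupancy rises as the SRDF crosses zero (front of surface -> behind it).
    occ = 1.0 / (1.0 + np.exp(s * srdf))  # sigmoid(-s * srdf)
    # Alpha compositing: weight_i = occ_i * prod_{j<i}(1 - occ_j)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - occ[:-1]]))
    weights = occ * trans
    color = (weights[:, None] * colors).sum(axis=0)
    depth = (weights * depths).sum()
    return color, depth
```

With uniform colors and SRDF crossing zero mid-ray, the rendered depth lands near the zero crossing, which is the behavior the SRDF representation is designed to give.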
3D Textured Shape Recovery with Learned Geometric Priors
3D textured shape recovery from partial scans is crucial for many real-world
applications. Existing approaches have demonstrated the efficacy of implicit
function representation, but they suffer from partial inputs with severe
occlusions and varying object types, which greatly hinders their application
value in the real world. This technical report presents our approach to address
these limitations by incorporating learned geometric priors. To this end, we
generate an SMPL model from learned pose prediction and fuse it into the partial
input to add prior knowledge of human bodies. We also propose a novel
completeness-aware bounding box adaptation for handling different levels of
scales and partialness of partial scans.
Comment: 5 pages, 3 figures, 2 tables
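The "completeness-aware bounding box adaptation" could plausibly work by growing the observed box of a partial scan according to a predicted completeness score. The scaling rule below (treating completeness as a volume fraction) is purely this sketch's assumption, not the report's method:

```python
import numpy as np

def adapt_bbox(points, completeness, margin=0.05):
    """Expand the bounding box of a partial scan by estimated completeness.

    points:       (N, 3) partial-scan points
    completeness: scalar in (0, 1], a prediction of how much of the full
                  object the scan covers (scaling rule is a guess, not the
                  paper's formulation)
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    center = 0.5 * (lo + hi)
    half = 0.5 * (hi - lo)
    # The less complete the scan, the more we grow the box; the cube root
    # treats completeness as a fraction of the object's volume.
    scale = (1.0 / completeness) ** (1.0 / 3.0) + margin
    return center - scale * half, center + scale * half
```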
Development of Flexible Robot Skin for Safe and Natural Human–Robot Collaboration
For industrial manufacturing, industrial robots are required to work together with human counterparts on certain special occasions, where human workers share their skills with robots. Intuitive human–robot interaction brings increasing safety challenges, which can be properly addressed by using sensor-based active control technology. In this article, we designed and fabricated a three-dimensional flexible robot skin made from a piezoresistive nanocomposite, based on the need to enhance the security performance of the collaborative robot. The robot skin endowed the YuMi robot with a tactile perception like human skin. The developed sensing unit in the robot skin showed a one-to-one correspondence between force input and resistance output (percentage change in impedance) in the range of 0–6.5 N. Furthermore, the calibration result indicated that the developed sensing unit is capable of offering a maximum force sensitivity (percentage change in impedance per Newton of force) of 18.83% N⁻¹ when loaded with an external force of 6.5 N. The fabricated sensing unit showed good reproducibility after loading with cyclic force (0–5.5 N) at a frequency of 0.65 Hz for 3500 cycles. In addition, to suppress the bypass crosstalk in the robot skin, we designed a readout circuit for sampling tactile data. Moreover, experiments were conducted to estimate the contact/collision force between the object and the robot in real time. The experiment results showed that the implemented robot skin can provide an efficient approach for natural and secure human–robot interaction.
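The sensitivity figure quoted above (percentage change in impedance per Newton) follows directly from its definition. A small sketch of that arithmetic, with purely hypothetical resistance values rather than the article's measurements:

```python
def force_sensitivity(r0, r, force):
    """Force sensitivity of a piezoresistive sensing unit: the percentage
    change in resistance per Newton of applied force.

    r0:    unloaded resistance (ohms) -- example values only
    r:     resistance under load (ohms)
    force: applied force (N)
    """
    pct_change = 100.0 * abs(r - r0) / r0  # percentage change in resistance
    return pct_change / force              # % per Newton

# Hypothetical example: 1000 ohm unloaded, 800 ohm under 2 N -> 10 % N^-1
print(force_sensitivity(1000.0, 800.0, 2.0))
```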
Self-Calibrated Multi-Sensor Wearable for Hand Tracking and Modeling
We present a multi-sensor system for consistent 3D hand pose tracking and modeling that leverages the advantages of both wearable and optical sensors. Specifically, we employ a stretch-sensing soft glove and three IMUs in combination with an RGB-D camera. Different sensor modalities are fused based on availability and confidence estimation, enabling seamless hand tracking in challenging environments with partial or even complete occlusion. To maximize accuracy while maintaining high ease-of-use, we propose an automated user calibration that uses the RGB-D camera data to refine both the glove mapping model and the multi-IMU system parameters. Extensive experiments show that our setup outperforms wearable-only approaches when the hand is in the field of view and surpasses camera-only methods when the hand is occluded.
ISSN: 1077-2626, 1941-0506, 2160-930
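Fusing modalities "based on availability and confidence estimation" can be illustrated with a simple confidence-weighted average over whichever sensors are currently usable. The paper's actual fusion is more involved; this is only a sketch of the idea, and the function name and inputs are hypothetical:

```python
import numpy as np

def fuse_poses(estimates, confidences):
    """Confidence-weighted fusion of per-sensor hand-pose estimates.

    estimates:   list of (J, 3) joint-position arrays (e.g. glove, IMUs,
                 camera); unavailable sensors are passed as None
    confidences: matching list of scalar confidences in [0, 1]
    """
    # Keep only sensors that are available and carry nonzero confidence.
    pairs = [(e, c) for e, c in zip(estimates, confidences)
             if e is not None and c > 0.0]
    if not pairs:
        raise ValueError("no sensor available")
    total = sum(c for _, c in pairs)
    return sum(c * np.asarray(e) for e, c in pairs) / total
```

When the camera estimate drops out (complete occlusion), the wearable estimates carry the full weight, which mirrors the seamless-handover behavior the abstract describes.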